On Concentration for (Regularized) Empirical Risk Minimization
Authors
Abstract
Similar Articles
Distributed Block-diagonal Approximation Methods for Regularized Empirical Risk Minimization
Designing distributed algorithms for empirical risk minimization (ERM) has become an active research topic in recent years because of the practical need to deal with huge volumes of data. In this paper, we propose a general framework for training an ERM model by solving its dual problem in parallel over multiple machines. Our method provides a versatile approach for many large-scale machine...
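As a point of reference, the dual problem alluded to in this abstract can be written for a generic linear-predictor ERM; the losses \phi_i, regularizer g, data x_i, and weight \lambda below are a standard illustrative setup, not notation taken from the paper itself:

\min_{w \in \mathbb{R}^d}\; P(w) \;=\; \frac{1}{n}\sum_{i=1}^{n}\phi_i\!\left(x_i^{\top}w\right) \;+\; \lambda\, g(w),
\qquad
\max_{\alpha \in \mathbb{R}^n}\; D(\alpha) \;=\; \frac{1}{n}\sum_{i=1}^{n}\bigl(-\phi_i^{*}(-\alpha_i)\bigr) \;-\; \lambda\, g^{*}\!\left(\frac{1}{\lambda n}\sum_{i=1}^{n}\alpha_i x_i\right),

where \phi_i^{*} and g^{*} are the convex conjugates of the loss and the regularizer. Distributed dual methods of this kind typically assign blocks of the dual variables \alpha_i to different machines.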
Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate method, which alternates between maximizing over one (or more) randomly chosen dual variable and minimizing over the primal variab...
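In a generic form (reusing the illustrative notation from the sketch above), the convex-concave saddle-point reformulation mentioned here is typically obtained by writing each loss through its conjugate, \phi_i(x_i^{\top}w) = \max_{\alpha_i}\,\bigl(\alpha_i\, x_i^{\top}w - \phi_i^{*}(\alpha_i)\bigr), which gives

\min_{w \in \mathbb{R}^d}\;\max_{\alpha \in \mathbb{R}^n}\;\Bigl\{\, \frac{1}{n}\sum_{i=1}^{n}\bigl(\alpha_i\, x_i^{\top}w - \phi_i^{*}(\alpha_i)\bigr) \;+\; \lambda\, g(w) \,\Bigr\}.

One iteration of a primal-dual coordinate method then ascends in a randomly chosen coordinate \alpha_i and takes a proximal descent step in w.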
Efficient algorithm for regularized risk minimization
The recently proposed Optimized Cutting Plane Algorithm (OCA) is an efficient method for solving large-scale quadratically regularized risk minimization problems. The existing open-source library LIBOCAS implements the OCA algorithm for two important instances of such problems, namely, the Support Vector Machine algorithms for training a linear two-class classifier (SVM) and for training a linear mult...
Efficient Algorithm for Regularized Risk Minimization
Many machine learning algorithms lead to solving a convex regularized risk minimization problem. Despite its convexity, the problem is often very demanding in practice due to a high number of variables or a complex objective function. The Bundle Method for Risk Minimization (BMRM) is a recently proposed method for minimizing a generic regularized risk. Unlike approximative methods, the BMRM ...
Bundle Methods for Regularized Risk Minimization
A wide variety of machine learning problems can be described as minimizing a regularized risk functional, with different algorithms using different notions of risk and different regularizers. Examples include linear Support Vector Machines (SVMs), Gaussian Processes, Logistic Regression, Conditional Random Fields (CRFs), and Lasso amongst others. This paper describes the theory and implementati...
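Since the three abstracts above all build on cutting-plane (bundle) approximations of the empirical risk, a minimal self-contained sketch of that template may help. It pairs an L2 regularizer with the hinge loss and solves the inner bundle subproblem through its simplex-constrained dual with scipy's SLSQP routine; it illustrates the generic idea only, not the actual OCA or BMRM implementations, and all function names are choices made for this sketch.

import numpy as np
from scipy.optimize import minimize


def hinge_risk_and_subgrad(w, X, y):
    """Average hinge loss R(w) = (1/n) sum_i max(0, 1 - y_i <x_i, w>) and one subgradient."""
    margins = 1.0 - y * (X @ w)
    active = margins > 0
    risk = np.maximum(margins, 0.0).mean()
    subgrad = -(X[active].T @ y[active]) / len(y)
    return risk, subgrad


def solve_bundle_subproblem(A, b, lam):
    """Minimize lam/2 ||w||^2 + max_j (<a_j, w> + b_j) via its dual over the simplex."""
    t = len(b)

    def neg_dual(beta):
        v = A.T @ beta                      # v = sum_j beta_j a_j
        return (v @ v) / (2.0 * lam) - b @ beta

    res = minimize(
        neg_dual,
        np.full(t, 1.0 / t),                # start at the centre of the simplex
        method="SLSQP",
        bounds=[(0.0, 1.0)] * t,
        constraints=[{"type": "eq", "fun": lambda beta: beta.sum() - 1.0}],
    )
    return -(A.T @ res.x) / lam             # recover the primal w from the dual solution


def cutting_plane_erm(X, y, lam=0.1, max_iter=50, tol=1e-4):
    """Generic bundle/cutting-plane loop for min_w lam/2 ||w||^2 + R(w)."""
    n, d = X.shape
    w = np.zeros(d)
    cuts_A, cuts_b = [], []
    for _ in range(max_iter):
        risk, a = hinge_risk_and_subgrad(w, X, y)
        upper = 0.5 * lam * (w @ w) + risk                  # true objective at the current iterate
        cuts_A.append(a)
        cuts_b.append(risk - a @ w)                         # offset so the cut touches R at w
        A, b = np.array(cuts_A), np.array(cuts_b)
        w = solve_bundle_subproblem(A, b, lam)
        lower = 0.5 * lam * (w @ w) + np.max(A @ w + b)     # minimum of the piecewise-linear model
        if upper - lower < tol:                             # gap certifies near-optimality
            break
    return w


if __name__ == "__main__":
    # Tiny synthetic usage example.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 5))
    y = np.sign(X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(200))
    w = cutting_plane_erm(X, y)
    print("training accuracy:", np.mean(np.sign(X @ w) == y))

Each iteration adds one linear lower bound (cut) on the empirical risk and reoptimizes the regularized piecewise-linear model; the gap between the objective at the current iterate and the model minimum gives a stopping criterion, which is the common thread of the OCA and BMRM approaches described above.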
Journal
Journal title: Sankhya A
Year: 2017
ISSN: 0976-836X,0976-8378
DOI: 10.1007/s13171-017-0111-9